Using Domain Adversarial Learning for Text Captchas Recognition
Authors
Abstract
Similar resources
Named Entity Recognition in Persian Text using Deep Learning
Named entity recognition is a fundamental task in the field of natural language processing and is also known as a subset of information extraction. The process of recognizing named entities aims to find proper nouns in text and classify them into predetermined classes such as names of people, organizations, and places. In this paper, we propose a named entity recognizer which benefi...
Adversarial Multi-task Learning for Text Classification
Neural network models have shown promise for multi-task learning, which focuses on learning shared layers to extract common, task-invariant features. However, in most existing approaches, the extracted shared features are prone to contamination by task-specific features or by noise brought in by other tasks. In this paper, we propose an adversarial multi-task lear...
Multinomial Adversarial Networks for Multi-Domain Text Classification
Many text classification tasks are known to be highly domain-dependent. Unfortunately, the availability of training data can vary drastically across domains. Worse still, for some domains there may not be any annotated data at all. In this work, we propose a multinomial adversarial network (MAN) to tackle text classification in this real-world multi-domain setting (MDTC). We provide ...
Image Recognition CAPTCHAs
CAPTCHAs are tests that distinguish humans from software robots in an online environment [3, 14, 7]. We propose and implement three CAPTCHAs based on naming images, distinguishing images, and identifying an anomalous image out of a set. Novel contributions include proposals for two new CAPTCHAs, the first user study on image recognition CAPTCHAs, and a new metric for evaluating CAPTCHAs.
Adversarial Objectives for Text Generation
Language models can be used to generate text by iteratively sampling words conditioned on previously sampled words. In this work, we explore adversarial objectives to obtain good text generations by training a recurrent language model to keep its hidden-state statistics during sampling similar to what it saw during maximum likelihood estimation (MLE) training. We analyze the convergence of...
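The iterative sampling procedure described in this abstract — drawing each word conditioned on the previous ones — can be sketched with a toy bigram model. The vocabulary and probabilities below are hypothetical, chosen only to illustrate the mechanism, not taken from the paper:

```python
import random

# Toy bigram "language model": hypothetical next-word distributions.
# "<s>" marks sentence start, "</s>" marks sentence end.
BIGRAMS = {
    "<s>": [("the", 0.6), ("a", 0.4)],
    "the": [("cat", 0.5), ("dog", 0.5)],
    "a":   [("cat", 0.5), ("dog", 0.5)],
    "cat": [("</s>", 1.0)],
    "dog": [("</s>", 1.0)],
}

def sample_next(word, rng):
    """Sample the next word conditioned on the previous word."""
    words, probs = zip(*BIGRAMS[word])
    return rng.choices(words, weights=probs, k=1)[0]

def generate(rng, max_len=10):
    """Iteratively sample words, each conditioned on the last one drawn."""
    out, word = [], "<s>"
    for _ in range(max_len):
        word = sample_next(word, rng)
        if word == "</s>":
            break
        out.append(word)
    return out
```

A real recurrent language model replaces the lookup table with a hidden state updated at each step, but the sampling loop — condition, sample, feed back — has the same shape.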
Journal
Journal title: Proceedings of the Institute for System Programming of the RAS
Year: 2020
ISSN: 2079-8156, 2220-6426
DOI: 10.15514/ispras-2020-32(4)-15